A Practical Guide for Economists
The Ohio State University
Part I
AI in Your Research
For the following use cases, discuss whether you think AI is appropriate to use as part of your research:
Part II
My Thoughts on AI
AI has made it virtually costless to write grammatically correct prose with no spelling errors.
AI has given everyone a (near) Ph.D.-level coding assistant and data analyst.
What is the equilibrium response for economics job market candidates?
Today’s focus: Building an AI-augmented research infrastructure
1. Learn the fundamentals — You can’t evaluate code you don’t understand. AI will confidently give you wrong answers—you need to catch them. Think of AI as a fast RA who occasionally hallucinates.
2. But don’t fall behind — Your peers are using these tools. The productivity gap compounds quickly. “I prefer the old way” is not a competitive strategy.
3. The old system was imperfect — Research errors happened before AI—and were caught late or not at all. Better workflows + AI review can catch errors earlier.
The goal isn’t “AI vs. no AI”—it’s building systems that make research more robust, faster, and more replicable.
Case 1: Reinhart & Rogoff (2010) — An Excel formula didn’t include 5 rows of data. Paper claimed high debt causes -0.1% growth; corrected: +2.2% growth. Used to justify global austerity. Found by a grad student, 3 years later.
Case 2: Deschênes & Greenstone (2007 AER) — Coding errors and missing weather data in climate-agriculture analysis. Paper claimed climate change would increase ag profits; corrected results showed the opposite. Found 5 years later via replication.
Part III
Building Your Stack
Work from OneDrive, Dropbox, or Google Drive, not your local machine alone. A consistent folder structure makes AI tools more effective because they can see how your project is organized. It also gives you collaboration and automatic backup.
project-folder/
├── data/
│ ├── raw/
│ └── clean/
├── code/
│ ├── 01-clean.R
│ ├── 02-analysis.R
│ └── 03-figures.R
├── output/
├── paper/
└── README.md
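If you're starting from scratch, a skeleton like the one above can be built in a few shell commands (the folder names come from the tree; the numbered script names are just the placeholders shown there):

```shell
# Create the nested folders in one pass (-p creates parents, no error if they exist)
mkdir -p project-folder/data/raw \
         project-folder/data/clean \
         project-folder/code \
         project-folder/output \
         project-folder/paper

# Stub out the numbered pipeline scripts and a README
touch project-folder/code/01-clean.R \
      project-folder/code/02-analysis.R \
      project-folder/code/03-figures.R \
      project-folder/README.md
```

The numbered prefixes (01-, 02-, 03-) make the intended run order obvious to both humans and AI assistants.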
No more analysis_v3_final_FINAL2.R — Git tracks it all.
- model <- lm(y ~ x1 + x2, data = df)
+ model <- lm(y ~ x1 + x2 + x1:x2, data = df)  # added interaction
Confused about how to set up Git and GitHub? Ask AI!
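The core Git loop is smaller than it looks. A minimal sketch (the commands are standard Git; the file name and commit message are illustrative):

```shell
# Start a repository and identify yourself (normally done once with --global)
git init project-demo && cd project-demo
git config user.name  "Your Name"
git config user.email "you@example.com"

# The everyday loop: edit, stage, commit
echo 'df <- read.csv("data/raw/panel.csv")' > 01-clean.R
git add 01-clean.R                          # stage the changed file
git commit -m "Add data-cleaning script"    # snapshot with a message
git log --oneline                           # one line per commit in the history
```

Every commit is a recoverable version, which is what replaces `analysis_v3_final_FINAL2.R`.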
Claude Code
What: Terminal-based, agentic coding assistant
Best for: R and data analysis workflows, file operations & project setup, autonomous multi-step tasks, Git integration
Runs from command line, can execute code and modify files directly.
Cursor
What: AI-powered code editor (a VS Code fork)
Best for: Iterative code editing, larger software projects, inline suggestions while typing, tab completion on steroids
Full IDE experience with AI built in.
Basic Claude Code is powerful, but can be unpredictable.
Structured workflows add:
my-project/
├── CLAUDE.md # Project "constitution"
├── .claude/
│ └── rules/
│ ├── r-code-conventions.md # R coding standards
│ ├── quality-gates.md # 80/90/95 scoring
│ └── plan-first-workflow.md # Plan → Approve → Implement
├── code/
├── data/
│ ├── raw/
│ └── clean/
└── output/
CLAUDE.md is read at the start of every session — Claude knows your conventions automatically.
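What goes in CLAUDE.md is up to you. A sketch of what one might contain (every convention here is an example, not a requirement):

```markdown
# CLAUDE.md

## Project
Panel-data analysis in R; scripts in code/ run in numbered order.

## Conventions
- Follow tidyverse style; write outputs to output/, never modify data/raw/.
- Plan first: propose a step-by-step plan and wait for approval before editing files.
- After changing anything in code/, re-run the affected script and report errors verbatim.
```

Because Claude reads this file at the start of every session, conventions stated once are enforced everywhere.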
The pattern: Plan → Approve → Implement → Review.
Think of it like hiring a contractor: discuss the plan, approve it, let them work, review the result.
Every file gets a score (0-100). Scores below threshold block the action:
| Score | Meaning |
|---|---|
| 80+ | Commit allowed |
| 90+ | Ready for PR/sharing |
| 95+ | Publication quality |
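One way to enforce thresholds like these is a small gate function in a pre-commit hook. A hypothetical sketch in shell (how the score is produced, e.g. by an AI review pass writing it to a file, is left to your workflow):

```shell
# check_gate SCORE [THRESHOLD]: succeed if SCORE meets the gate, fail otherwise.
# The 80 default mirrors the "commit allowed" row in the table above.
check_gate() {
  score=$1
  threshold=${2:-80}
  if [ "$score" -ge "$threshold" ]; then
    echo "PASS ($score >= $threshold)"
    return 0
  else
    echo "BLOCK ($score < $threshold)"
    return 1
  fi
}

check_gate 85       # → PASS (85 >= 80)
check_gate 92 90    # → PASS (92 >= 90), i.e. ready for PR/sharing
```

Wired into `.git/hooks/pre-commit`, a non-zero return blocks the commit until the file is improved.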
What gets scored:
The workflow:
Code (R/Python) → Figures & Tables → GitHub Repo → Overleaf Paper
One integrated system: Code changes automatically flow through to your paper.
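One concrete way to wire the pipeline together is a Makefile, so a change to any script re-runs only the downstream steps. A sketch under assumptions (the data and output file names are illustrative; recipes are tab-indented; the last step relies on Overleaf's GitHub sync):

```make
# Each target rebuilds only when one of its prerequisites changes
output/results.rds: code/02-analysis.R data/clean/panel.rds
	Rscript code/02-analysis.R

output/fig1.pdf: code/03-figures.R output/results.rds
	Rscript code/03-figures.R

# Commit regenerated outputs; Overleaf pulls them via its GitHub integration
push: output/fig1.pdf
	git add output/ && git commit -m "Update figures" && git push
```

Running `make push` after editing the analysis script re-runs the analysis, redraws the figure, and ships it toward the paper in one step.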
AI can hallucinate packages and functions — It will confidently suggest code using libraries that don’t exist. Always run and verify.
Don’t let AI do your economic thinking — It’s a tool, not a co-author. Identification, intuition, and interpretation are yours.
Always test AI-generated code — Run it. Check edge cases. Verify results make sense.
Be thoughtful about sensitive data — Know what data you’re sending to AI services. Check your IRB and data use agreements.
Don’t skip the planning step — Review Claude’s plan before implementation. Easier to fix a wrong plan than wrong code.
| Resource | Description |
|---|---|
| Kevin Bryan’s “Tech Stack” | Modern research tools for economists |
| Fernández-Villaverde’s Git Tutorial | Git tutorial for academic research |
| Claude Code Docs | Official Claude Code documentation |
| Cursor | AI-first code editor |
| Rudik’s workflow | Make + R structured workflow |
| Sant’Anna’s workflow | LaTeX + R + quality gates |